Towards Reliable Benchmarking: A Contamination-Free, Controllable Evaluation Framework for Multi-step LLM Function Calling
Maekawa, Seiji, Hassell, Jackson, Pezeshkpour, Pouya, Mitchell, Tom, Hruschka, Estevam
As language models gain access to external tools via structured function calls, they become increasingly capable of solving complex, multi-step tasks. However, existing benchmarks for tool-augmented language models (TaLMs) provide insufficient control over factors such as the number of accessible functions, task complexity, and input size, and remain vulnerable to data contamination. We present FuncBenchGen, a unified, contamination-free framework that evaluates TaLMs by generating synthetic multi-step tool-use tasks. The key idea is to cast tool use as traversal over a hidden function-dependency DAG, where nodes are function calls and an edge between nodes represents one function consuming the output of another. Given a set of external function schemas, initial variable values, and a target variable, models must compose the correct call sequence to compute the target variable. FuncBenchGen allows users to precisely control task difficulty (e.g., graph size, dependency depth, and distractor functions) while avoiding data leakage. We apply FuncBenchGen to evaluate seven LLMs on tool-use tasks of varying difficulty. Reasoning-optimized models consistently outperform general-purpose models, with GPT-5 significantly outperforming the others. Performance declines sharply as dependency depth increases, and connected irrelevant functions prove especially difficult to handle. We find that strong models often make syntactically valid function calls but propagate incorrect or stale argument values across steps, revealing brittle state tracking by LLMs in multi-turn tool use. Motivated by this observation, we introduce a simple mitigation strategy that explicitly restates prior variable values to the agent at each step. Surprisingly, this lightweight change yields substantial gains across models, e.g., raising GPT-5's success rate from 62.5% to 81.3%.
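The DAG-traversal formulation described above can be sketched as a small oracle planner: encode each function schema as an (input variables, output variable) pair and recursively resolve which calls must fire, in what order, to reach the target. The schema encoding and the names `plan_calls`, `f`, `g`, `h` below are illustrative assumptions, not FuncBenchGen's actual API.

```python
def plan_calls(functions, known_vars, target):
    """Resolve a call order over a function-dependency DAG.

    `functions` maps a function name to (input_vars, output_var);
    `known_vars` holds the initially known variables.  Returns a list
    of function names that computes `target`, or None if the target is
    unreachable.  Assumes the dependency graph is acyclic.
    """
    # Invert the schemas: which function produces each variable?
    producer = {out: (name, ins) for name, (ins, out) in functions.items()}
    order, resolved = [], set(known_vars)

    def resolve(var):
        if var in resolved:
            return True
        if var not in producer:
            return False          # no function produces this variable
        name, ins = producer[var]
        if not all(resolve(v) for v in ins):
            return False
        order.append(name)        # all inputs ready: schedule this call
        resolved.add(var)
        return True

    return order if resolve(target) else None
```

Distractor functions, in this encoding, are simply entries whose outputs never appear on the path to the target; the planner ignores them, whereas an LLM under evaluation has to learn to do the same.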
Hierarchical graph sampling based minibatch learning with chain preservation and variance reduction
Graph-sampling-based Graph Convolutional Networks (GCNs) decouple sampling from the forward and backward propagation during minibatch training and exhibit good scalability in terms of layer depth and graph size. We propose HIS_GCNs, a hierarchical importance-graph-sampling-based learning method. By constructing minibatches from sampled subgraphs, HIS_GCNs attends to the importance of both core and periphery nodes/edges in a scale-free training graph. Specifically, it preserves the centrum of the core in most minibatches, which maintains connectivity between periphery nodes, and samples periphery edges without core-node interference, so that more long chains composed entirely of low-degree nodes stay in the same minibatch. HIS_GCNs can maximize the discrete Ricci curvature (i.e., Ollivier-Ricci curvature) of the edges in a subgraph, which enables the preservation of important chains for information propagation, and can achieve low node-embedding variance and high convergence speed. Diverse experiments on Graph Neural Networks (GNNs) with node classification tasks confirm the superior performance of HIS_GCNs in both accuracy and training time. Code is open-sourced at https://github.com/HuQiaCHN/HIS-GCN.
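The core/periphery split behind this sampling idea can be sketched in a few lines, assuming a plain adjacency-list graph: keep the high-degree core in every minibatch, then add only edges whose endpoints are both low-degree, so that low-degree chains survive intact. The function name `his_minibatch` and the degree-quantile core definition are assumptions for illustration; the real HIS_GCNs sampler is importance-weighted and hierarchical, which this toy version omits.

```python
import random

def his_minibatch(adj, core_frac=0.1, n_periphery_edges=32, rng=None):
    """Toy hierarchical sampler over an adjacency-list graph.

    Keeps the top `core_frac` highest-degree nodes (the core) in the
    minibatch, then samples periphery edges whose endpoints are both
    outside the core, preserving chains of low-degree nodes.
    """
    rng = rng or random.Random(0)
    degree = {u: len(vs) for u, vs in adj.items()}
    # Degree cutoff for the core: the (1 - core_frac) quantile.
    cutoff = sorted(degree.values())[int(len(degree) * (1 - core_frac))]
    core = {u for u, d in degree.items() if d >= cutoff}
    # Periphery edges: both endpoints low-degree (no core interference).
    periphery_edges = [(u, v) for u, vs in adj.items() for v in vs
                       if u < v and u not in core and v not in core]
    sampled = rng.sample(periphery_edges,
                         min(n_periphery_edges, len(periphery_edges)))
    nodes = core | {u for e in sampled for u in e}
    return nodes, sampled
```

On a hub-plus-chain graph, the hub always lands in the minibatch while the chain edges are sampled as a unit, which is the behavior the abstract describes.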
GRID: Protecting Training Graph from Link Stealing Attacks on GNN Models
Lou, Jiadong, Yuan, Xu, Zhang, Rui, Yuan, Xingliang, Gong, Neil, Tzeng, Nian-Feng
Graph neural networks (GNNs) have exhibited superior performance in various classification tasks on graph-structured data. However, they are vulnerable to link stealing attacks, which can infer the presence of a link between two nodes by measuring the similarity of the prediction vectors that a GNN model produces for the link's incident nodes. Such attacks pose severe security and privacy threats to the training graph used in GNN models. In this work, we propose a novel solution, called Graph Link Disguise (GRID), to defend against link stealing attacks with a formal guarantee of GNN model utility, i.e., retained prediction accuracy. The key idea of GRID is to add carefully crafted noises to the nodes' prediction vectors so that adjacent nodes are disguised as n-hop indirect neighbors. We take the graph topology into account and select only a subset of nodes (called core nodes) covering all links for adding noises, which prevents the noises from offsetting one another and has the further advantage of reducing both the distortion loss and the computation cost. Our crafted noises ensure that 1) the noisy prediction vectors of any two adjacent nodes have a similarity level like that of two non-adjacent nodes, and 2) the model's predictions are unchanged, ensuring zero utility loss. Extensive experiments on five datasets show the effectiveness of GRID against different representative link stealing attacks under both transductive and inductive settings, as well as against two influence-based attacks. Meanwhile, GRID achieves a much better privacy-utility trade-off than existing methods when extended to GNNs.
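The zero-utility-loss constraint mentioned above can be made concrete with a toy sketch: perturb every entry of a prediction vector except the winning class, and clip the perturbed entries so the argmax (the predicted label) cannot change. The function `disguise` is a hypothetical name; GRID's actual noise crafting also calibrates the pairwise similarity between adjacent nodes' vectors, which this sketch omits.

```python
import numpy as np

def disguise(pred, noise_scale, rng):
    """Add noise to a prediction vector while keeping its argmax.

    The winning entry is left untouched and every perturbed entry is
    clipped below it, so the predicted label (model utility) is
    unchanged by construction.
    """
    top = int(np.argmax(pred))
    noise = rng.uniform(-noise_scale, noise_scale, size=pred.shape)
    noise[top] = 0.0                       # never touch the top score
    noisy = pred + noise
    # Clip so no perturbed entry overtakes the original top score.
    others = np.arange(len(pred)) != top
    noisy[others] = np.minimum(noisy[others], pred[top] - 1e-6)
    return noisy
```

Because an attacker measures similarity between such vectors, even this label-preserving perturbation changes what the similarity test sees, which is the lever GRID exploits with far more carefully shaped noise.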
Uncovering the hidden core-periphery structure in hyperbolic networks
Ansari, Imran, Yadav, Pawanesh, Sahni, Niteesh
Hyperbolic network models exhibit fundamental and essential features such as small-worldness, scale-freeness, a high clustering coefficient, and community structure. In this paper, we comprehensively explore the presence in hyperbolic network models of another important feature often exhibited by real-world networks: the core-periphery structure. We focus on well-known hyperbolic models such as the popularity-similarity optimization (PSO) model and the S1/H2 models, and study their core-periphery structures using a well-established method based on a standard random-walk Markov chain model. The observed core-periphery centralization values indicate that the core-periphery structure can be very pronounced under certain conditions. We also validate our findings by statistically testing the significance of the observed core-periphery structure in the network geometry. This study extends network science and reveals core-periphery insights applicable to various domains, enhancing network performance and resiliency in transportation and information systems.
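The basic ingredient of a random-walk Markov chain analysis is the walk's stationary distribution, which on an undirected graph concentrates probability mass on high-degree (core) nodes. The sketch below computes it by power iteration on the row-stochastic transition matrix; it shows only this ingredient, not the full core-periphery centralization statistic the paper uses, and `random_walk_coreness` is a name chosen here for illustration.

```python
import numpy as np

def random_walk_coreness(adj, n_steps=500):
    """Score nodes by the stationary distribution of a simple random walk.

    `adj` is a dense adjacency matrix with no isolated nodes.  Power
    iteration on the row-stochastic transition matrix converges (for a
    connected, aperiodic graph) to the stationary distribution, which
    for an undirected graph equals degree / (2 * num_edges) -- so
    high-degree core nodes accumulate the most probability mass.
    """
    a = np.asarray(adj, dtype=float)
    p = a / a.sum(axis=1, keepdims=True)   # transition probabilities
    pi = np.full(len(a), 1.0 / len(a))     # start from the uniform vector
    for _ in range(n_steps):
        pi = pi @ p
    return pi
```

On a triangle with a pendant node, the walk settles on the degree distribution, ranking the degree-3 node first, consistent with it being the core.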
Core-Intermediate-Peripheral Index: Factor Analysis of Neighborhood and Shortest Paths-based Centrality Metrics
The topological importance of nodes in complex networks has been analyzed in the literature from the perspectives of core-periphery structure and centrality metrics. While core-periphery analysis of a network is more of a qualitative (and sometimes quantitative) approach at a mesoscopic level, centrality metrics are designed to quantify the topological importance of individual nodes. Core-periphery analysis aims to categorize each node as either a core node or a peripheral node. The status quo in the literature on the definitions of core and peripheral nodes is that core nodes need to be of higher degree and form a highly dense backbone to which the low-degree peripheral nodes are connected; peripheral nodes are also expected not to be connected to other peripheral nodes. Some works in the literature (e.g., [1-3]) have suggested that high-degree nodes need not always be core nodes, but they still analyze the core-periphery structure and quantify the extent of a node's coreness within the realms of the above model.
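The idealized block model described in this paragraph (a fully connected core, a periphery with no internal links) can be checked directly: given a candidate core set, count the deviations from the ideal in each direction. The function `cp_violations` below is a minimal sketch of that check, not a method from the paper.

```python
def cp_violations(adj, core):
    """Count deviations from the ideal core-periphery block model.

    `adj` maps each node to its set of neighbors.  In the ideal model,
    every core-core pair is linked and no periphery-periphery pair is;
    the return value counts (missing core-core links,
    present periphery-periphery links).
    """
    nodes = list(adj)
    missing_core, extra_periph = 0, 0
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            linked = v in adj[u]
            if u in core and v in core and not linked:
                missing_core += 1
            if u not in core and v not in core and linked:
                extra_periph += 1
    return missing_core, extra_periph
```

A return value of (0, 0) means the partition matches the ideal model exactly; real networks are scored by how far these counts stay from zero.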
Core-Periphery Principle Guided Redesign of Self-Attention in Transformers
Yu, Xiaowei, Zhang, Lu, Dai, Haixing, Lyu, Yanjun, Zhao, Lin, Wu, Zihao, Liu, David, Liu, Tianming, Zhu, Dajiang
Designing more efficient, reliable, and explainable neural network architectures is critical to studies based on artificial intelligence (AI) techniques. Previous studies have found, via post-hoc analysis, that the best-performing ANNs surprisingly resemble biological neural networks (BNNs), which indicates that ANNs and BNNs may share common principles for achieving optimal performance in either machine learning or cognitive/behavior tasks. Inspired by this phenomenon, we proactively instill organizational principles of BNNs to guide the redesign of ANNs. We leverage the Core-Periphery (CP) organization, which is widely found in human brain networks, to guide the information communication mechanism in the self-attention of the vision transformer (ViT), and name this novel framework CP-ViT. In CP-ViT, the attention operation between nodes is defined by a sparse graph with a core-periphery structure (CP graph), in which the core nodes are redesigned and reorganized to play an integrative role and serve as centers through which periphery nodes exchange information. We evaluated the proposed CP-ViT on multiple public datasets, including medical image datasets (INbreast) and natural image datasets. Interestingly, by incorporating the BNN-derived principle (CP structure) into the redesign of ViT, our CP-ViT outperforms other state-of-the-art ANNs. In general, our work advances the state of the art in three aspects: 1) it provides novel insights for brain-inspired AI: we can utilize principles found in BNNs to guide and improve ANN architecture design; 2) we show that there exist sweet spots of CP graphs that lead to CP-ViTs with significantly improved performance; and 3) the core nodes in CP-ViT correspond to task-related, meaningful, and important image patches, which can significantly enhance the interpretability of the trained deep model.
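The CP-graph attention pattern described above can be sketched as a boolean mask over token pairs: core tokens attend everywhere, and periphery tokens attend only to the core (and themselves), so all periphery-to-periphery communication routes through the core. The function name `cp_attention_mask` and the "first `n_core` tokens are the core" layout are simplifying assumptions; the paper's CP graphs are richer than this block pattern.

```python
import numpy as np

def cp_attention_mask(n_tokens, n_core):
    """Boolean attention mask from an idealized core-periphery graph.

    Tokens 0..n_core-1 are core: they attend to all tokens, and every
    token attends to them.  Periphery tokens otherwise attend only to
    themselves, so information between periphery tokens must pass
    through the core hub.
    """
    mask = np.zeros((n_tokens, n_tokens), dtype=bool)
    mask[:n_core, :] = True          # core rows: attend to everyone
    mask[:, :n_core] = True          # every row: attend to the core
    np.fill_diagonal(mask, True)     # self-attention always allowed
    return mask
```

Applied inside self-attention (e.g., by setting masked logits to -inf before the softmax), this sparsifies the quadratic attention pattern while keeping every pair of tokens within two hops of each other.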
CP-CNN: Core-Periphery Principle Guided Convolutional Neural Network
Zhao, Lin, Dai, Haixing, Wu, Zihao, Zhu, Dajiang, Liu, Tianming
The evolution of convolutional neural networks (CNNs) can be largely attributed to the design of their architecture, i.e., the network wiring pattern. Neural architecture search (NAS) advances this by automating the search for the optimal network architecture, but the resulting network instance may not generalize well across different tasks. To overcome this, exploring network design principles that are generalizable across tasks is a more practical solution. In this study, we explore a novel brain-inspired design principle based on the core-periphery property of the human brain network to guide the design of CNNs. Our work draws inspiration from recent studies suggesting that artificial and biological neural networks may share common principles in optimizing network architecture. We implement the core-periphery principle in the design of network wiring patterns and in the sparsification of the convolution operation. The resulting core-periphery principle guided CNNs (CP-CNNs) are evaluated on three different datasets. The experiments demonstrate their effectiveness and superiority over CNN- and ViT-based methods. Overall, our work contributes to the growing field of brain-inspired AI by incorporating insights from the human brain into the design of neural networks.
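One way to read "sparsification of the convolution operation" under a core-periphery wiring is as a mask over channel-to-channel connections: links that touch a core channel on either side are kept, and periphery-to-periphery links are pruned. The sketch below applies such a mask to a convolution weight tensor; the name `sparsify_conv_weights` and the "first channels are core" layout are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def sparsify_conv_weights(w, n_core_out, n_core_in):
    """Prune periphery-to-periphery channel connections in a conv layer.

    `w` has shape (out_ch, in_ch, k, k).  A connection between an output
    and an input channel is kept if either channel belongs to the core
    (the first n_core_out / n_core_in channels); all other kernels are
    zeroed, yielding a core-periphery wiring between channels.
    """
    out_ch, in_ch = w.shape[:2]
    keep = np.zeros((out_ch, in_ch), dtype=bool)
    keep[:n_core_out, :] = True      # core output channels: keep all inputs
    keep[:, :n_core_in] = True       # core input channels: feed all outputs
    # Broadcast the channel mask over the k x k spatial kernel dims.
    return w * keep[:, :, None, None]
```

Because zeroed kernels contribute nothing to the output, the dense convolution computes the same result as an explicitly sparse wiring, which makes this masking a convenient way to prototype CP-style connectivity.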
Scalable Machine Learning with Spark
Since the early 2000s, the amount of data collected has increased enormously due to the advent of internet giants such as Google, Netflix, YouTube, Amazon, and Facebook. Around 2010, another "data wave" came about when mobile phones became hugely popular, and in the 2020s we anticipate yet another exponential rise in data as IoT devices become all-pervasive. Against this backdrop, building scalable systems becomes a sine qua non for machine learning solutions. Pre-2005, parallel processing libraries such as MPI and PVM were popular for compute-heavy tasks, a lineage that TensorFlow's design later drew on. Relational database design of that era, by contrast, aimed to reduce data redundancy by dividing larger tables into smaller ones and linking them using relationships (normalization).